Neural Network with Unbounded Activations is Universal Approximator

Authors

  • Sho Sonoda
  • Noboru Murata
Abstract

This paper presents an investigation of the approximation property of neural networks with unbounded activation functions, such as the rectified linear unit (ReLU), which is the new de facto standard of deep learning. The ReLU network can be analyzed by the ridgelet transform with respect to Lizorkin distributions. By showing three reconstruction formulas using the Fourier slice theorem, the Radon transform, and Parseval's relation, it is shown that a neural network with unbounded activation functions still satisfies the universal approximation property. As an additional consequence, the ridgelet transform, or the backprojection filter in the Radon domain, is what the network learns after backpropagation. Subject to a constructive admissibility condition, the trained network can be obtained by simply discretizing the ridgelet transform, without backpropagation. Numerical examples not only support the consistency of the admissibility condition but also imply that some non-admissible cases result in low-pass filtering.
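The key practical claim is that the hidden-layer parameters of a one-hidden-layer ReLU network can be written down by discretizing an integral representation of the target function, rather than learned by backpropagation. The sketch below illustrates that construction on a deliberately simplified one-dimensional analogue, Taylor's theorem with integral remainder, f(x) = f(t0) + f'(t0)(x − t0) + ∫ f''(b) ReLU(x − b) db on an interval, rather than the paper's ridgelet transform over Lizorkin distributions; the target function, helper names, and grid sizes are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed names, not from the paper): build a one-hidden-layer
# ReLU network by discretizing an integral representation of the target, with no
# backpropagation.  Instead of the ridgelet transform, this uses the 1-D identity
#   f(x) = f(t0) + f'(t0)*(x - t0) + \int_{t0}^{t1} f''(b) ReLU(x - b) db,  x in [t0, t1],
# i.e. Taylor's theorem with integral remainder.
import numpy as np

def discretize_integral_rep(d2f, t0=-1.0, t1=1.0, n_units=200):
    """Hidden-unit biases b_j and outer weights c_j from a rectangle-rule
    discretization of the integral term (c_j = f''(b_j) * db)."""
    b = np.linspace(t0, t1, n_units, endpoint=False)  # hidden-unit biases
    db = (t1 - t0) / n_units                          # quadrature weight
    return b, d2f(b) * db

def relu_net(x, b, c, f0, df0, t0=-1.0):
    """Evaluate f0 + df0*(x - t0) + sum_j c_j * ReLU(x - b_j)."""
    hidden = np.maximum(x[:, None] - b[None, :], 0.0)  # ReLU(x - b_j)
    return f0 + df0 * (x - t0) + hidden @ c

# Target f(x) = sin(3x) on [-1, 1]: f''(x) = -9 sin(3x), f(-1) = sin(-3), f'(-1) = 3 cos(-3).
d2f = lambda b: -9.0 * np.sin(3.0 * b)
b, c = discretize_integral_rep(d2f)

x = np.linspace(-1.0, 1.0, 1000)
approx = relu_net(x, b, c, f0=np.sin(-3.0), df0=3.0 * np.cos(-3.0))
print("max error:", np.max(np.abs(approx - np.sin(3.0 * x))))  # decreases as n_units grows
```

Refining the bias grid tightens the quadrature error, mirroring how a finer discretization of the ridgelet reconstruction formula yields a more accurate network without any training.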

Similar articles

A trainable transparent universal approximator for defuzzification in Mamdani-type neuro-fuzzy controllers

A novel technique of designing application specific defuzzification strategies with neural learning is presented. The proposed neural architecture considered as a universal defuzzification approximator is validated by showing the convergence when approximating several existing defuzzification strategies. The method is successfully tested with fuzzy controlled reverse driving of a model truck. T...

Universal Approximation with Fuzzy ART and Fuzzy ARTMAP

A measure of success for any learning algorithm is how useful it is in a variety of learning situations. Those learning algorithms that support universal function approximation can theoretically be applied to a very large and interesting class of learning problems. Many kinds of neural network architectures have already been shown to support universal approximation. In this paper, we wi...

Multiplex visibility graphs to investigate recurrent neural network dynamics

A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize internal dynamics of a class of RNNs called echo state networks (ESNs). We design pr...

Universal Approximator Property of the Space of Hyperbolic Tangent Functions

In this paper, first the space of hyperbolic tangent functions is introduced and then the universal approximator property of this space is proved. In fact, by using this space, any nonlinear continuous function can be uniformly approximated with any degree of accuracy. Also, as an application, this space of functions is utilized to design feedback control for a nonlinear dynamical system.

Journal:
  • CoRR

Volume abs/1505.03654  Issue 

Pages  -

Publication date 2015